Automatic Parameter Adaptation for Multi-object Tracking
Object tracking quality usually depends on the video context (e.g. object occlusion level, object density). To reduce this dependency, this paper presents a learning approach that adapts the tracker parameters to context variations. In an offline phase, satisfactory tracking parameters are learned for clusters of video contexts. In the online control phase, once a context change is detected, the tracking parameters are tuned using the learned values. The experimental results show that the proposed approach outperforms recent state-of-the-art trackers. This paper brings two contributions: (1) a classification method of video sequences to learn offline tracking parameters, and (2) a new method to tune online tracking parameters using the tracking context.
Comment: International Conference on Computer Vision Systems (ICVS) (2013)
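The offline-cluster/online-tuning scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cluster centroids, parameter names, parameter values and the change-detection threshold are all assumptions made up for the example.

```python
import math

# Hypothetical context clusters learned offline: each maps a centroid of
# contextual features (object density, occlusion level) to tracker
# parameters that performed well on that cluster's training videos.
LEARNED_CLUSTERS = [
    {"centroid": (0.2, 0.1), "params": {"gate_dist": 30, "appearance_w": 0.7}},
    {"centroid": (0.8, 0.6), "params": {"gate_dist": 15, "appearance_w": 0.9}},
]

def nearest_cluster(context):
    """Return the learned cluster whose centroid is closest to the
    current context feature vector (Euclidean distance)."""
    return min(LEARNED_CLUSTERS,
               key=lambda c: math.dist(c["centroid"], context))

def tune_tracker(current_context, previous_context, threshold=0.3):
    """Online control step: if the context drifted past a threshold,
    switch to the parameters learned for the nearest context cluster."""
    if math.dist(current_context, previous_context) > threshold:
        return nearest_cluster(current_context)["params"]
    return None  # no context change detected; keep current parameters
```

In use, the controller would call `tune_tracker` once per frame (or per detected context change) and apply the returned parameters to the underlying tracker.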
Online Tracking Parameter Adaptation based on Evaluation
Parameter tuning is a common issue for many tracking algorithms. To address it, this paper proposes an online parameter tuning approach that adapts a tracking algorithm to various scene contexts. In an offline training phase, the approach learns how to tune the tracker parameters to cope with different contexts. In the online control phase, once the tracking quality is evaluated as insufficient, the proposed approach computes the current context and tunes the tracking parameters using the learned values. The experimental results show that the proposed approach improves the performance of the tracking algorithm and outperforms recent state-of-the-art trackers. This paper brings two contributions: (1) an online tracking evaluation, and (2) a method to adapt online tracking parameters to scene contexts.
Comment: IEEE International Conference on Advanced Video and Signal-based Surveillance (2013)
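The evaluation-triggered control loop above can be sketched in a few lines. The quality measure and threshold are illustrative assumptions (here, quality is simply the fraction of tracks judged consistent between consecutive frames), not the paper's actual evaluation metric.

```python
def tracking_quality(consistent_tracks, total_tracks):
    """Online evaluation: proportion of tracks judged consistent
    between consecutive frames (assumed metric for illustration)."""
    if total_tracks == 0:
        return 1.0  # nothing to track, nothing to penalize
    return consistent_tracks / total_tracks

def control_step(consistent, total, retune, quality_threshold=0.6):
    """Retune only when the evaluated quality drops below threshold;
    `retune` stands for computing the context and looking up learned
    parameter values."""
    if tracking_quality(consistent, total) < quality_threshold:
        return retune()
    return None  # quality acceptable; keep the current parameters
```

The key design point reflected here is that parameter adaptation is gated by the evaluation, so a well-performing tracker is left undisturbed.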
A multi-feature tracking algorithm enabling adaptation to context variations
We propose in this paper a tracking algorithm that is able to adapt itself to different scene contexts. A feature pool is used to compute the matching score between two detected objects. This feature pool includes 2D and 3D displacement distances, 2D sizes, color histogram, histogram of oriented gradients (HOG), color covariance and dominant color. An offline learning process is proposed to search for useful features and to estimate their weights for each context. In the online tracking process, a temporal window is defined to establish the links between the detected objects. This makes it possible to recover object trajectories even if the objects are misdetected in some frames. A trajectory filter is proposed to remove noisy trajectories. The proposed tracker has been tested on videos belonging to three public datasets and to the Caretaker European project, covering different contexts. The experimental results demonstrate the effect of the proposed feature weight learning and the robustness of the proposed tracker compared to several state-of-the-art methods. The contributions of our approach over state-of-the-art trackers are: (i) a robust tracking algorithm based on a feature pool, (ii) a supervised learning scheme to learn feature weights for each context, (iii) a new method to quantify the reliability of the HOG descriptor, and (iv) a combination of color covariance and dominant color features with a spatial pyramid distance to manage object occlusion.
Comment: The International Conference on Imaging for Crime Detection and Prevention (ICDP) (2011)
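The feature-pool matching score can be illustrated as a weighted combination of per-feature similarities. The feature names and weights below are assumptions for the example; in the paper the weights are learned offline per context.

```python
def matching_score(similarities, weights):
    """Matching score between two detected objects: weighted sum of
    per-feature similarities (each in [0, 1]), normalized by the total
    weight so the result stays in [0, 1]."""
    total = sum(weights.values())
    if total == 0:
        return 0.0
    return sum(weights[f] * similarities[f] for f in weights) / total
```

For example, with hypothetical weights `{"2d_dist": 1, "color_hist": 1}` learned for a given context, two objects with similarities `{"2d_dist": 1.0, "color_hist": 0.5}` score 0.75, and links would be established within the temporal window by taking the highest-scoring pairs.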
Robust Mobile Object Tracking Based on Multiple Feature Similarity and Trajectory Filtering
This paper presents a new algorithm to track mobile objects in different scene conditions. The main idea of the proposed tracker combines estimation, multi-feature similarity measures and trajectory filtering. A feature set (distance, area, shape ratio, color histogram) is defined for each tracked object to search for the best matching object. The best matching object and the state estimated by the Kalman filter are combined to update the position and size of the tracked object. However, mobile object trajectories are usually fragmented because of occlusions and misdetections. Therefore, we also propose a trajectory filter, named global tracker, which aims at removing noisy trajectories and fusing the fragmented trajectories belonging to the same mobile object. The method has been tested with five videos of different scene conditions. Three of them are provided by the ETISEO benchmarking project (http://www-sop.inria.fr/orion/ETISEO), in which the proposed tracker's performance has been compared with seven other tracking algorithms. The advantages of our approach over existing state-of-the-art ones are: (i) no prior knowledge is required (e.g. no calibration and no contextual models are needed), (ii) the tracker is more reliable thanks to combining multiple feature similarities, (iii) the tracker can perform in different scene conditions: single/several mobile objects, weak/strong illumination, indoor/outdoor scenes, (iv) a trajectory filter is defined and applied to improve the tracker performance, and (v) the tracker outperforms many state-of-the-art algorithms.
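Two steps of the pipeline above lend themselves to a short sketch: fusing the Kalman prediction with the best-matching detection, and the global tracker's removal of noisy (short) trajectories. The fixed blending gain and the minimum-length criterion are illustrative assumptions, not the paper's actual update equations.

```python
def fuse_estimate(predicted, measured, gain=0.5):
    """Blend the Kalman-predicted state (position/size vector) with the
    best-matching detection; `gain` plays the role of the Kalman gain,
    assumed fixed here for illustration."""
    return tuple(p + gain * (m - p) for p, m in zip(predicted, measured))

def filter_trajectories(trajectories, min_length=5):
    """Global-tracker step: drop trajectories shorter than min_length
    frames, treating them as detection noise."""
    return [t for t in trajectories if len(t) >= min_length]
```

A full global tracker would additionally try to fuse surviving fragments that plausibly belong to the same mobile object before discarding anything.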
Object Tracking in Videos: Approaches and Issues
Mobile object tracking plays an important role in computer vision applications. In this paper, we use a tracked-target-based taxonomy to present object tracking algorithms. The tracked targets are divided into three categories: points of interest, appearance and silhouette of mobile objects. Advantages and limitations of the tracking approaches are also analyzed to identify future directions in the object tracking domain.
Adaptive Neuro-Fuzzy Controller for Multi-Object Tracker
Sensitivity to scene conditions, such as contrast and illumination intensity, is one of the factors significantly affecting the performance of object trackers. To overcome this issue, tracker parameters need to be adapted based on changes in contextual information. In this paper, we propose an intelligent mechanism to adapt the tracker parameters in a real-time, online fashion. When a frame is processed by the tracker, a controller extracts the contextual information, based on which it adapts the tracker parameters for successive frames. The proposed controller relies on a learned neuro-fuzzy inference system to find satisfactory tracker parameter values. The proposed approach is trained on nine publicly available benchmark video datasets and tested on three unrelated video datasets. The performance comparison indicates a clear tracking performance improvement over the tracker with static parameter values, as well as over other state-of-the-art trackers.
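A toy fuzzy inference step gives the flavor of such a controller. The membership functions, the two rules (low contrast maps to a low detection threshold, high contrast to a high one) and the output values are invented for illustration; the paper's system is learned from training data.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and
    peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_threshold(contrast):
    """Two-rule inference with weighted-average defuzzification:
    LOW contrast -> threshold 0.2, HIGH contrast -> threshold 0.8.
    All numbers here are illustrative assumptions."""
    low = triangular(contrast, -0.5, 0.0, 0.5)
    high = triangular(contrast, 0.5, 1.0, 1.5)
    if low + high == 0:
        return 0.5  # no rule fires; fall back to a neutral default
    return (low * 0.2 + high * 0.8) / (low + high)
```

In an adaptive tracker, such an inference would run once per processed frame, feeding the defuzzified parameter values to the tracker for the successive frames.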
Feature Matching using Co-inertia Analysis for People Tracking
Robust object tracking is a challenging computer vision problem due to dynamic changes in object pose, illumination, appearance and occlusions. Tracking objects between frames requires accurate matching of their features. We investigate real-time matching of mobile object features for frame-to-frame tracking. This paper presents a new feature matching approach between objects for tracking that incorporates a multivariate analysis method called Co-Inertia Analysis (COIA). This approach is introduced to compute the similarity between Histogram of Oriented Gradients (HOG) features of the tracked objects. The experiments conducted show the effectiveness of this approach for mobile object feature tracking.
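To illustrate the frame-to-frame HOG matching setting (not COIA itself, which couples two feature tables rather than comparing two vectors directly), a plain cosine similarity between HOG vectors serves as a simpler stand-in:

```python
import math

def cosine_similarity(hog_a, hog_b):
    """Similarity between two HOG feature vectors in [-1, 1].
    Note: this is a simple stand-in used only to illustrate
    frame-to-frame feature matching; the paper's method is COIA."""
    dot = sum(a * b for a, b in zip(hog_a, hog_b))
    na = math.sqrt(sum(a * a for a in hog_a))
    nb = math.sqrt(sum(b * b for b in hog_b))
    return dot / (na * nb) if na and nb else 0.0
```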
Robust Global Tracker based on an Online Estimation of Tracklet Descriptor Reliability
Complex scene conditions such as light changes, high density of mobile objects or object occlusion can cause object misdetections. When a tracker cannot recover these misdetections, the trajectory of an object is fragmented into short trajectories called tracklets. As a result, tracking quality is reduced remarkably. In this paper, we propose a new approach to improve the tracking quality with a global tracker which merges all tracklets belonging to an object over the whole video. In particular, we compute descriptor reliability over time based on the descriptors' discrimination. In addition, a motion model is combined with the appearance descriptors in a flexible way to improve the tracking quality. The proposed approach is evaluated on four benchmark datasets. The obtained results show the robustness and effectiveness of our approach compared to both tracking and tracklet-linking approaches from the state of the art.
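The idea of weighting appearance descriptors by an online reliability estimate and blending in a motion model can be sketched as a single linking score between two tracklets. The fixed motion weight and the reliability-weighted average are illustrative assumptions, not the paper's formulation.

```python
def link_score(desc_sims, reliabilities, motion_sim, motion_weight=0.5):
    """Score for linking two tracklets: reliability-weighted average of
    appearance-descriptor similarities, blended with a motion-model
    similarity. Higher scores mean the tracklets more likely belong to
    the same object."""
    total_rel = sum(reliabilities)
    appearance = (
        sum(r * s for r, s in zip(reliabilities, desc_sims)) / total_rel
        if total_rel else 0.0
    )
    return (1 - motion_weight) * appearance + motion_weight * motion_sim
```

A global tracker would then merge the tracklet pair with the highest score above some acceptance threshold and repeat until no pair qualifies.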
Online Parameter Tuning for Object Tracking Algorithms
Object tracking quality usually depends on video scene conditions (e.g. illumination, density of objects, object occlusion level). To overcome this limitation, this article presents a new control approach to adapt the object tracking process to scene condition variations. More precisely, this approach learns how to tune the tracker parameters to cope with tracking context variations. The tracking context, or context, of a video sequence is defined as a set of six features: density of mobile objects, their occlusion level, their contrast with regard to the surrounding background, their contrast variance, their 2D area and their 2D area variance. In an offline phase, training video sequences are classified by clustering their contextual features. Each context cluster is then associated with satisfactory tracking parameters. In the online control phase, once a context change is detected, the tracking parameters are tuned using the learned values. The approach has been experimented with three different tracking algorithms and on long, complex video datasets. This article brings two significant contributions: (1) a classification method of video sequences to learn offline tracking parameters and (2) a new method to tune online tracking parameters using the tracking context.
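The six-feature context vector listed above can be computed from per-object observations roughly as follows. The input format (dicts with `occlusion`, `contrast` and `area` keys) and the use of the object count as the density feature are assumptions made for the sketch.

```python
from statistics import mean, pvariance

def context_features(objects):
    """Build the six-feature context vector for one frame (or window)
    from per-object observations: dicts with assumed keys 'occlusion',
    'contrast' and 'area'."""
    contrasts = [o["contrast"] for o in objects]
    areas = [o["area"] for o in objects]
    return (
        len(objects),                           # density of mobile objects
        mean(o["occlusion"] for o in objects),  # occlusion level
        mean(contrasts),                        # contrast vs. background
        pvariance(contrasts),                   # contrast variance
        mean(areas),                            # 2D area
        pvariance(areas),                       # 2D area variance
    )
```

Offline, such vectors extracted from training sequences would be clustered; online, the current vector is matched to a cluster to retrieve the associated tracking parameters.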
Automatic Tracker Selection w.r.t Object Detection Performance
The performance of a tracking algorithm depends on the video content. This paper presents a new multi-object tracking approach which is able to cope with video content variations. First, object detection is improved using Kanade-Lucas-Tomasi (KLT) feature tracking. Second, for each mobile object, an appropriate tracker is selected between a KLT-based tracker and a discriminative appearance-based tracker. This selection is supported by an online tracking evaluation. The approach has been experimented on three public video datasets. The experimental results show a better performance of the proposed approach compared to recent state-of-the-art trackers.
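The per-object selection step can be sketched as choosing, for each object, whichever candidate tracker currently scores best under the online evaluation. The confidence scores are assumed inputs here; how they are computed is the substance of the online evaluation itself.

```python
def select_tracker(klt_confidence, appearance_confidence):
    """Per-object tracker selection driven by online evaluation scores
    (assumed to be in [0, 1]); ties favor the cheaper KLT tracker."""
    return "klt" if klt_confidence >= appearance_confidence else "appearance"

def assign_trackers(object_scores):
    """Map each object id to its selected tracker, given per-object
    (klt_confidence, appearance_confidence) score pairs."""
    return {oid: select_tracker(k, a)
            for oid, (k, a) in object_scores.items()}
```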